It might seem like you should memorize them and their arguments
Don’t
There are too many. Instead, focus on the logic/grammar
First comes the function name
Then the ( opens the call
Then I give arguments, written as argument = what I want
Then I close it off with )
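Putting that grammar together, here is a short sketch; the coin vector and the choice of sample() are just for illustration:

```r
# name, then "(", then argument = value pairs, then ")"
aCoin <- c("HEADS", "TAILS")              # c() glues values into a vector
oneFlip <- sample( x = aCoin, size = 1 )  # sample() draws one value from aCoin
print( oneFlip )                          # prints either "HEADS" or "TAILS"
```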
If you get stuck, Google things
I don’t recommend using ChatGPT for these classes
It’s going to give you some crazy stuff, and it only works well when you already know how to write a good prompt
Quantifying likelihood under the null hypothesis
Count how many heads you got
nHeads <- sum( twentyOneFlips == "HEADS" )  # count the number of HEADS out of 21
print( nHeads )                             # print the number of heads

[1] 12
Testing the Null Hypothesis with Simulation
Now, do this a whole bunch of times.
Using a loop
In each iteration, the loop will create a single simulated sample:
# setting up the simulation
nSims  <- 10000                   # number of simulations
aCoin  <- c("HEADS", "TAILS")     # create a "coin"
nHeads <- rep(NA, times = nSims)  # an empty vector to hold simulation results

# loop to conduct simulation
for (i in 1:nSims) {
  twentyOneFlips <- sample( aCoin, size = 21, replace = TRUE )  # flip 21 coins
  nHeads[i] <- sum( twentyOneFlips == "HEADS" )                 # count and store the number of HEADS
}
You may notice there is no “output” for this simulation (i.e., nothing was printed).
That was deliberate. And good!
It would be practically useless to print the results for each sample.
Instead, we store the result of interest from each sample in the nHeads vector.
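Once the results are stored, the probability we care about is just the proportion of simulated samples with 16 or more HEADS. A minimal sketch, repeating the simulation above (the seed is arbitrary, chosen only for reproducibility):

```r
set.seed(1)                           # arbitrary seed, for reproducibility
nSims  <- 10000
aCoin  <- c("HEADS", "TAILS")
nHeads <- rep(NA, times = nSims)
for (i in 1:nSims) {
  twentyOneFlips <- sample( aCoin, size = 21, replace = TRUE )
  nHeads[i] <- sum( twentyOneFlips == "HEADS" )
}
pHat <- mean( nHeads >= 16 )          # proportion of simulations with 16+ HEADS
print( pHat )                         # a small proportion, well under .05
```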
A fair coin, flipped 21 times, only has a 2% chance of landing on HEADS 16 or more times.
But you still can’t be 100% certain
There is a chance your sibling just got lucky.
How could you be more certain?
Collect more data
Flip it another 100 times or so and tally the results
The bigger your sample size, the more power you have to detect systematic differences.
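One way to see this in code: suppose the coin were actually rigged to land HEADS 65% of the time (a made-up number for illustration). Simulating how often a binomial test rejects "fair" at each sample size shows the larger sample catches the rigging far more often:

```r
set.seed(1)                                  # arbitrary seed, for reproducibility
pRigged <- 0.65                              # hypothetical rigged coin (assumption)
nSims   <- 2000                              # simulations per sample size
detectRate <- function(nFlips) {
  rejections <- replicate(nSims, {
    heads <- rbinom(1, size = nFlips, prob = pRigged)  # flip the rigged coin
    binom.test(heads, nFlips, p = 0.5)$p.value < 0.05  # did we reject "fair"?
  })
  mean(rejections)                           # proportion of samples that detect it
}
rate21  <- detectRate(21)                    # small sample: rigging often slips by
rate121 <- detectRate(121)                   # bigger sample: detected much more often
print( rate21 )
print( rate121 )
```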
You could also look for other evidence
Has your brother been searching the internet for ways to rig coin flips?
But, there will always be some uncertainty. That’s just how it is.
“Rejecting” the Null Hypothesis
Traditionally (and arbitrarily), researchers will often declare a result “significant” if p < .05 (for a two-tailed test)
When a result has p < .05, researchers will often say that they can “reject the null hypothesis”
Thus, there is a tendency for people to consider results with p-values less than .05 as “true” and dismiss those with p-values greater than .05 as noise.
We should be more nuanced as researchers, recognizing that evidence comes in varying strengths and treating each piece of evidence accordingly.
I don’t think the “accept”/“reject” language helps.
Instead, I prefer language that I think is more accurate:
Evidence is either consistent with or inconsistent with the null hypothesis